51 research outputs found

    Fluctuation-Driven Flocking Movement in Three Dimensions and Scale-Free Correlation

    Recent advances in the study of flocking behavior have permitted more sophisticated analyses than previously possible. The concepts of “topological distances” and “scale-free correlations” are important developments that have contributed to this improvement. These concepts require us to reconsider the notion of a neighborhood when applied to theoretical models. Previous work has assumed that individuals interact with neighbors within a certain radius (called the “metric distance”). However, other work has shown that, assuming topological interactions, starlings interact on average with the six or seven nearest neighbors within a flock. Accounting for this observation, we previously proposed a metric-topological interaction model in two dimensions. The goal of our model was to unite these two interaction components, the metric distance and the topological distance, into one rule. In our previous study, we demonstrated that the metric-topological interaction model could explain a real bird flocking phenomenon called scale-free correlation, which was first reported by Cavagna et al. In this study, we extended our model to three dimensions while also accounting for variations in speed. This three-dimensional metric-topological interaction model displayed scale-free correlation for velocity and orientation. Finally, we introduced an additional new feature of the model, namely, that a flock can store and release its fluctuations.
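    The abstract describes uniting the metric rule (interact within a radius) and the topological rule (interact with the k nearest neighbors) into one neighbor-selection rule. A minimal sketch of that combination, assuming the simplest reading of the rule — the function name, parameter names, and defaults (radius, k = 7) are illustrative, not the authors' actual implementation:

    ```python
    import numpy as np

    def metric_topological_neighbors(positions, i, radius=1.0, k=7):
        """Sketch: agent i's neighbors are its k nearest agents
        (topological cut) that also lie within `radius` (metric cut)."""
        diffs = positions - positions[i]
        dists = np.linalg.norm(diffs, axis=1)
        dists[i] = np.inf                      # exclude the agent itself
        order = np.argsort(dists)              # nearest first
        nearest_k = order[:k]                  # topological cut
        return [j for j in nearest_k if dists[j] <= radius]  # metric cut
    ```

    With k large the rule reduces to a pure metric neighborhood; with a large radius it reduces to a pure topological one.
    
    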

    Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise

    Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as LĂ©vy walks). However, such noisy behavior inside groups appears to preclude collective behavior: intuitively, we expect that noise would destabilize the group and break it into small subgroups, and high alignment seems to preclude the shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is a multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concepts in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, LĂ©vy walks in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions.
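    The multi-time-scale idea treats attraction and alignment as two extremes of one prediction rule: at horizon zero the agent steers toward the neighbor's current position (attraction), and at a long horizon it steers toward the neighbor's extrapolated heading (alignment). A minimal sketch of that idea, assuming simple linear extrapolation — the function names and the extrapolation scheme are illustrative assumptions:

    ```python
    import numpy as np

    def predicted_point(pos, vel, horizon):
        """Linearly extrapolate a neighbor's position `horizon` time
        units ahead; horizon 0 returns the current position."""
        return pos + horizon * vel

    def steering_direction(agent_pos, neighbor_pos, neighbor_vel, horizon):
        """Unit vector from the agent toward the neighbor's predicted
        point: attraction-like for small horizons, alignment-like for
        large ones."""
        d = predicted_point(neighbor_pos, neighbor_vel, horizon) - agent_pos
        return d / np.linalg.norm(d)
    ```

    Varying the horizon sweeps continuously between the two classical interaction terms, which is the sense in which the model "naturally extends" attraction and alignment.
    
    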

    侍漌慹ăȘæƒ…ć ±äž‹ă§ăźć€‹äœ“ăźæŒŻă‚‹èˆžă„ăŒäœœă‚‹çŸ€ă‚ŒăŒæ‹…ă†ć€šćȐ的ăȘæ©Ÿèƒœăźç ”ç©¶

    Grants-in-Aid for Scientific Research (KAKENHI) research report: Grant-in-Aid for Young Scientists (B), 2015–2016, project number 15K1606

    Parameters for Figs 4 to 7.

    Parameters for Figs 4 to 7.

    Schematics of the two prediction methods.

    (A) Alignment prediction: the prediction point Q_T is the end point of the elongated neighbor's direction vector v_i(t). (B) Anticipation: the prediction point Q_T is the predicted landing point if the turning rate of the agent's neighbor stays the same as its previous rate, that is, dφ_i^s(t) = dφ_i^s(t + 1). The value of s indicates how many time steps of its neighbor's past movement the agent refers to. The quasi-alignment point P_T is the point where the segment from the agent to the prediction point Q_T crosses the boundary of the neighborhood.
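    The anticipation rule in panel (B) assumes the neighbor keeps its previous turning rate and rolls its heading forward to find the landing point Q_T. A hedged sketch of that extrapolation, assuming a simple discrete step-by-step integration — the exact integration scheme and argument names are illustrative, not the paper's:

    ```python
    import math

    def anticipation_point(x, y, phi, dphi, speed, steps):
        """Predict a neighbor's landing point Q_T under the constant-
        turning-rate assumption dphi(t) = dphi(t + 1): advance the
        heading by dphi each step and move `speed` along it."""
        for _ in range(steps):
            phi += dphi                 # constant turning rate
            x += speed * math.cos(phi)
            y += speed * math.sin(phi)
        return x, y
    ```

    With dphi = 0 this degenerates to the alignment prediction of panel (A): the point at the end of the elongated direction vector.
    
    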

    Schematics of attraction, alignment, and their generalization.

    (A) Schematic of attraction (left) and alignment (right). The agent (red dot) is attracted toward its neighbor's (black dot) current position (Q_0: left) and infinite future position (Q_∞: right). (B) Schematic of attraction and alignment with a neighborhood. The circle indicates the boundary of the agent's neighborhood. In the attraction case (left), the point P_0 coincides with Q_0. In the alignment case (right), the point P_∞ reflects the information of Q_∞ as the point on the boundary the agent would reach if it moved in a direction parallel to its neighbor's motion. (C) Schematic of the generalization from P_0 and P_∞ to P_τ and P_T. P_τ is the crossing point of the agent's extended direction vector with the boundary of the neighborhood. P_T is the point where the vector from the agent to a certain prediction point Q_T crosses the boundary of the neighborhood.
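    Since the neighborhood is a circle centered on the agent, projecting any prediction point Q_T onto the boundary point P_T reduces to scaling the unit vector toward Q_T by the neighborhood radius. A minimal sketch under that reading (names are illustrative):

    ```python
    import numpy as np

    def boundary_point(agent, target, radius):
        """P_T: the point where the ray from the agent toward the
        prediction point Q_T crosses the circular neighborhood
        boundary of the given radius centered on the agent."""
        agent = np.asarray(agent, dtype=float)
        d = np.asarray(target, dtype=float) - agent
        return agent + radius * d / np.linalg.norm(d)
    ```

    Feeding the neighbor's current position gives P_0, and feeding a far-extrapolated point along the neighbor's heading approaches P_∞, matching panels (B) and (C).
    
    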

    Schematics showing short-term and long-term prediction on the circle.

    (A) The circle shows quasi-attractions as internal information for agent 1. The numbers under the dots are the tags for the agents. The bold blue arc is a covering set for ξ_1. (B) The circle shows quasi-alignments as external information for agent 1. Each prediction point determines Θ_i. The bold red arc is a covering set for Θ_1. (C) The complete chart of our algorithm. In the first step, find an agent that has neighbors within its repulsion range (colored green) and apply the repulsion algorithm (the updated agent is shown in red). Repeat this until no agent has neighbors within its repulsion range. Then, construct a Delaunay triangulation from all agents' positional information. Apply quasi-attraction (Cov(ξ_i(t)), colored blue) and quasi-alignment (Cov(Θ_i(t)), colored red) and take their intersection (I_i(t), colored yellow). The agent of interest (colored blue) selects its direction from this yellow interval.
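    The Delaunay step of the algorithm defines each agent's interaction partners as the vertices it shares a triangle edge with. A sketch of that neighbor construction, assuming SciPy's Qhull-based triangulation is an acceptable stand-in for the paper's implementation:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_neighbors(points):
        """Build the Delaunay triangulation of the agents' positions
        and collect, for each agent, the set of agents it shares a
        triangle edge with."""
        tri = Delaunay(points)
        neighbors = {i: set() for i in range(len(points))}
        for simplex in tri.simplices:
            for a in simplex:
                for b in simplex:
                    if a != b:
                        neighbors[int(a)].add(int(b))
        return neighbors
    ```

    The covering sets Cov(ξ_i(t)) and Cov(Θ_i(t)) would then be built from these neighbors' positions and prediction points, and the agent picks its heading from their intersection.
    
    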

    Trajectory and recurrence times of the mean degree and the variance of edge weights.

    (a) Trajectory of the data through 600 steps (8 bits). The arrow in the graph indicates the direction of the process. After forming a circular trajectory, the system finally returns to the initial point. (b) Probability distribution of the recurrence time for the mean value of the mean degree. Colors correspond to the number of bits (7 bits: blue, 8 bits: red, 9 bits: green). The graph exhibits a power-law distribution (7 bits: N = 119900, scaling exponent α = 1.39, AIC weight of the power law w(p) = 1.00; 8 bits: N = 119800, scaling exponent α = 1.40, AIC weight of the power law w(p) = 1.00; 9 bits: N = 119859, scaling exponent α = 1.40, AIC weight of the power law w(p) = 1.00). (c) Probability distribution of the recurrence time for the mean value of the variance of edge weights. Colors correspond to the number of bits (7 bits: blue, 8 bits: red, 9 bits: green). In contrast to the mean degree, the graph exhibits an exponential distribution (7 bits: N = 119900, scaling exponent λ = 0.031, AIC weight w(e) = 1.00; 8 bits: N = 119800, scaling exponent λ = 0.031, AIC weight w(e) = 1.00; 9 bits: N = 119859, scaling exponent λ = 0.031, AIC weight w(e) = 1.00).
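    The scaling exponents α reported above come from a maximum likelihood fit of a power law to the recurrence times. A hedged sketch of one standard estimator — the continuous-approximation MLE formula alpha = 1 + n / ÎŁ ln(x / (xmin − 0.5)) is an assumption in the spirit of the fit; the paper's exact estimator and model-selection procedure (the AIC weights) may differ:

    ```python
    import math

    def powerlaw_alpha_mle(samples, xmin=1):
        """Estimate the power-law scaling exponent alpha by maximum
        likelihood over all samples >= xmin, using the continuous
        approximation for discrete data."""
        xs = [x for x in samples if x >= xmin]
        n = len(xs)
        s = sum(math.log(x / (xmin - 0.5)) for x in xs)
        return 1.0 + n / s
    ```

    The AIC weights w(p) and w(e) would then compare this power-law fit against an exponential fit of the same data.
    
    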

    Details of data for the EL network.

    (a) Distribution of the local clustering coefficient in the EL network. The horizontal axis corresponds to the local clustering coefficient (binned at 0.005) and the vertical axis corresponds to the proportion. (b) The proportion of edge weights for each Hamming distance. A large proportion of the edge weights is concentrated within 3 bits. This tendency is the same in all cases. (c) The in-degree and out-degree distributions for the EL network. Both graphs show an exponential decay with the same scaling parameters λ_in = 0.226 ± 0.026 and λ_out = 0.226 ± 0.026. We use a maximum likelihood estimation method for a discrete distribution [27, 28].
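    Panel (b) bins the network's edge weights by the Hamming distance between the binary states at each edge's endpoints. A minimal sketch of that computation, assuming edges are (source, target, weight) triples and states are equal-length bit strings — the data layout is an illustrative assumption:

    ```python
    def weight_by_hamming(edges, states):
        """Bin edge weights by the Hamming distance between endpoint
        states, then normalize to proportions as in panel (b)."""
        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))
        totals = {}
        for i, j, w in edges:
            d = hamming(states[i], states[j])
            totals[d] = totals.get(d, 0.0) + w
        total = sum(totals.values())
        return {d: w / total for d, w in totals.items()}
    ```

    The caption's observation corresponds to the proportions at distances 1–3 summing close to 1.
    
    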